13 research outputs found

    Modulation-function-based finite-horizon sensor fault detection for salient-pole PMSM using parity-space residuals

    An online model-based fault detection and isolation method for salient-pole permanent magnet synchronous motors over a finite horizon is proposed. The approach combines parity-space-based residual generation with modulation-function-based filtering. Given the polynomial model equations, the unknown variables (i.e., the states and unmeasured inputs) are eliminated, resulting in analytic redundancy relations used for residual generation. Furthermore, to avoid the derivatives of measured signals that such analytic redundancy relations require, a modulation-function-based evaluation is proposed, yielding a finite-horizon filtered version of the original residual. The fault detection and isolation method is demonstrated in simulations of various fault scenarios for a speed-controlled salient-pole motor, showing the effectiveness of the presented approach.
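
    The abstract gives no implementation details, but the core trick can be sketched: with a modulation function that vanishes at both ends of the horizon, integration by parts shifts the derivative of the measured signal onto the known modulation function, so the filtered residual needs no numerical differentiation. The signals, the fault profile, and the polynomial modulation function below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Minimal sketch of modulation-function-based residual evaluation.
# Hypothetical setup: a residual r(t) = dy/dt - u(t) would need the
# derivative of the measured signal y.  Choosing a modulation function
# phi with phi(0) = phi(T) = 0, integration by parts gives
#   int phi * dy/dt dt = -int dphi/dt * y dt,
# so the filtered residual uses only measured signals.

T, n = 1.0, 1001
t = np.linspace(0.0, T, n)

u = np.cos(2 * np.pi * t)                 # known input (hypothetical)
y = np.sin(2 * np.pi * t) / (2 * np.pi)   # measurement; dy/dt = u when fault-free
fault = np.where(t > 0.6, 0.05, 0.0)      # additive sensor fault after t = 0.6
y_meas = y + fault

phi = (t * (T - t)) ** 2                  # modulation function, zero at both ends
dphi = 2 * t * (T - t) ** 2 - 2 * t**2 * (T - t)

# Filtered residual over the finite horizon [0, T]:
# r_f = int(phi * dy/dt - phi * u) dt = -int(dphi * y) dt - int(phi * u) dt
r_f = np.trapz(-dphi * y_meas - phi * u, t)
r_f_clean = np.trapz(-dphi * y - phi * u, t)

print(f"filtered residual, faulty sensor : {r_f:+.6f}")
print(f"filtered residual, healthy sensor: {r_f_clean:+.6f}")  # ~ 0
```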

    Long-term dependency slow feature analysis for dynamic process monitoring

    Industrial processes are large-scale, highly complex systems. The complex flow of mass and energy, as well as the compensation effects of closed-loop control systems, causes significant cross-correlation and autocorrelation between process variables. To operate such systems stably and efficiently, it is crucial to uncover the inherent characteristics of both the variance structure and the dynamic relationships. Long-term dependency slow feature analysis (LTSFA) is proposed in this paper to overcome the Markov assumption of the original slow feature analysis and capture the long-term dynamics of processes, based on which a monitoring procedure is designed. A simulation example and the Tennessee Eastman process benchmark are studied to show the performance of LTSFA. The proposed method better extracts the system dynamics and monitors process variations using fewer slow features.
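
    As context for the method, here is a minimal sketch of the baseline linear slow feature analysis that LTSFA generalizes; the long-term-dependency extension itself is not reproduced, and the toy data set is an assumption for illustration.

```python
import numpy as np

# Minimal sketch of linear slow feature analysis (the single-step
# baseline that LTSFA extends).  Slow features are the directions that
# minimize the average squared one-step difference after whitening.

def sfa(X):
    """X: (n_samples, n_vars). Returns features sorted slowest first."""
    X = X - X.mean(axis=0)
    # Whiten the data: z = X @ W with cov(z) = I.
    cov = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs / np.sqrt(evals)            # whitening matrix
    Z = X @ W
    # Eigendecomposition of the covariance of the first differences;
    # the smallest eigenvalues correspond to the slowest features.
    dZ = np.diff(Z, axis=0)
    omegas, P = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return Z @ P, omegas                  # features and their slowness values

# Toy data: a slow drift mixed with faster variation.
rng = np.random.default_rng(0)
n = 2000
slow = np.sin(np.linspace(0, 4 * np.pi, n))           # slowly varying source
fast1 = rng.standard_normal(n)                        # fast sources
fast2 = rng.standard_normal(n)
X = np.column_stack([slow + 0.1 * fast1,
                     fast1 + 0.2 * fast2,
                     slow - 0.5 * fast2])

S, omegas = sfa(X)
print("slowness values (small = slow):", np.round(omegas, 4))
```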

    Multi-output soft sensor with a multivariate filter that predicts errors applied to an industrial reactive distillation process

    The paper deals with the problem of developing a multi-output soft sensor for the industrial reactive distillation process of methyl tert-butyl ether production. Unlike existing soft sensor approaches, this paper proposes a soft sensor with filters that predict the model errors, which are then applied as corrections to the final output predictions. A decomposition of the optimal time-delay estimation problem is proposed for each input of the soft sensor. Applying the proposed approach to predict the concentrations of methyl sec-butyl ether, methanol, and the sum of dimers and trimers of isobutylene in the output product of a reactive distillation column improved the results by 32%, 67%, and 9.5%, respectively.
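
    The error-correction idea can be sketched generically: fit a base soft-sensor model, fit a second model to its errors, and add the predicted error as a correction. The data, the model forms, and the omission of the time-delay estimation step are all simplifying assumptions, not the paper's design.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Minimal sketch of the error-prediction idea: a base soft-sensor model
# predicts the output, a second "filter" model predicts the base model's
# error from the inputs, and the final prediction is base + predicted error.

rng = np.random.default_rng(1)
n = 1000
X = rng.standard_normal((n, 3))                       # process inputs (hypothetical)
y = 2.0 * X[:, 0] - X[:, 1] + 0.3 * X[:, 2] ** 2      # true output (nonlinear)

X_tr, X_te, y_tr, y_te = X[:700], X[700:], y[:700], y[700:]

base = LinearRegression().fit(X_tr, y_tr)             # base (linear) model
err_tr = y_tr - base.predict(X_tr)                    # base-model errors

# Error-prediction filter: a regression on squared inputs stands in for
# the multivariate filter described in the paper.
err_model = LinearRegression().fit(X_tr ** 2, err_tr)

y_base = base.predict(X_te)
y_corr = y_base + err_model.predict(X_te ** 2)        # corrected prediction

rmse = lambda e: float(np.sqrt(np.mean(e ** 2)))
print(f"RMSE base     : {rmse(y_te - y_base):.4f}")
print(f"RMSE corrected: {rmse(y_te - y_corr):.4f}")
```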

    Modeling for the performance of navigation, control and data post-processing of underwater gliders

    Underwater gliders allow efficient monitoring in oceanography. In contrast to buoys, which log oceanographic data at individual depths at a single location, gliders can log data over a period of up to one year while following predetermined routes. In addition to the data logged from the available sensors, usually a conductivity-temperature-depth (CTD) sensor, the depth-average velocity can be estimated from the horizontal glider velocity and the GPS updates in a dead-reckoning algorithm. The horizontal velocity is also used for navigation and for planning long-term glider missions. This paper presents an investigation into determining the horizontal glider velocity as accurately as possible. To this end, Slocum glider flight models used in practice are presented and compared. A glider model for steady-state gliding motion based on this analysis is described in detail. The approach for estimating the individual model parameters using nonlinear regression is presented. In this context, a robust method to accurately estimate the angle of attack is presented, and the requirements on the logged vehicle data for statistically verified model parameters are discussed. The approaches are verified using logged data from glider missions in the Indian Ocean from 2016 to 2018. It is shown that a good match between the logged and the modeled data requires a time-varying model, where the model parameters change with respect to time. One reason for these changes is biofouling, where organisms settle and grow on the glider. The proposed method for determining an accurate horizontal glider velocity could serve to improve the dead-reckoning algorithm used by the glider for calculating the depth-average velocity and to understand its errors. The depth-average velocity is used to compare ocean current models from CMEMS and HYCOM with the glider-logged data.
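
    The nonlinear-regression step can be illustrated with a deliberately simplified steady-state glide relation; the model form (a constant angle-of-attack offset alpha and a scale factor k), the synthetic dive data, and the parameter values below are assumptions, not the Slocum flight model from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Minimal sketch of fitting steady-glide model parameters to logged data
# by nonlinear regression (hypothetical model and synthetic data).

def model(params, theta, w):
    alpha, k = params                        # angle-of-attack offset, scale factor
    return k * w / np.tan(theta + alpha)     # predicted horizontal velocity

def residuals(params, theta, w, u_meas):
    return model(params, theta, w) - u_meas

# Synthetic "logged" dive data in place of real glider missions.
rng = np.random.default_rng(2)
theta = np.deg2rad(rng.uniform(15.0, 35.0, 200))      # pitch angle [rad]
w = rng.uniform(0.05, 0.20, 200)                      # vertical velocity [m/s]
true_params = (np.deg2rad(-2.0), 1.05)                # hidden alpha, k
u_meas = model(true_params, theta, w) + 0.005 * rng.standard_normal(200)

fit = least_squares(residuals, x0=[0.0, 0.9], args=(theta, w, u_meas))
alpha_hat, k_hat = fit.x
print(f"alpha = {np.rad2deg(alpha_hat):+.2f} deg, k = {k_hat:.3f}")
```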

    Comparison of semirigorous and empirical models derived using data quality assessment methods

    With the increase in available data and the stricter control requirements for mineral processes, the development of automated methods for data processing and model creation is becoming increasingly important. In this paper, the application of data quality assessment methods to the development of semirigorous and empirical models of a primary milling circuit in a platinum concentrator plant is investigated, to determine their validity and how best to handle multivariate input data. The data set used consists of both routine operating data and planned step tests. In applying the data quality assessment method to this data set, it was seen that selecting the appropriate subset of variables for multivariate assessment was difficult. However, it was shown that it was possible to identify regions of sufficient value for modeling. Using the identified data, it was possible to fit empirical linear models and a semirigorous nonlinear model. As expected, the models obtained from routine operating data were, in general, worse than those obtained from the planned step tests. However, the models obtained from routine operating data would be extremely helpful as initial seed models for automated advanced process control methods. Therefore, it can be concluded that the data quality assessment method was able to extract and identify data regions sufficient and acceptable for modeling.
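
    One plausible reading of the region-identification step is a windowed screening of the operating data; the quality metrics (missing-data fraction and minimum excitation) and the thresholds below are illustrative stand-ins for the paper's data quality assessment method, not its actual criteria.

```python
import numpy as np

# Minimal sketch of region selection: score sliding windows of operating
# data with simple quality metrics and keep only windows with enough
# excitation and few missing samples for model fitting.

def select_regions(x, window=100, min_std=0.05, max_gap_frac=0.02):
    """Return start indices of windows judged usable for modeling."""
    starts = []
    for s in range(0, len(x) - window + 1, window):
        seg = x[s:s + window]
        gaps = np.isnan(seg).mean()
        if gaps <= max_gap_frac and np.nanstd(seg) >= min_std:
            starts.append(s)          # enough variation, few missing points
    return starts

rng = np.random.default_rng(3)
n = 2000
x = np.full(n, 1.0) + 0.01 * rng.standard_normal(n)   # flat routine operation
x[800:1200] += 0.5 * np.sin(np.linspace(0, 6 * np.pi, 400))  # step-test-like excitation
x[rng.integers(0, n, 15)] = np.nan                    # scattered missing samples

print("usable window starts:", select_regions(x))
```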

    Optimal design of a photovoltaic station using Markov and energy price modelling

    This paper addresses the optimization of photovoltaic (PV) systems to increase their efficiency. The study introduces a new pricing model that considers the current price of PV inverters. In addition, Markov modeling is used in a new optimization framework to determine the optimal configuration, considering the number of PV modules and inverters, operational constraints, and failure events of PV inverters up to 100 kW. A case study with six real PV inverters confirms the effectiveness of the proposed framework. It calculates the average daily hours of rated power generation, considering geographic location, temperature, and solar irradiance, using data from a real PV system. The study identifies both local and global optimal solutions for PV inverters (15 kW to 100 kW) while minimizing the effective levelized cost of energy. The results of the study have important implications for future assessments of PV module failures and repairs.
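
    The availability side of such a framework can be sketched with a two-state (up/down) Markov chain per inverter, whose steady-state availability scales the expected energy yield entering a levelized-cost estimate; all rates, prices, and site figures below are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: each inverter is a two-state Markov chain with failure
# rate lambda_f and repair rate mu_r; its steady-state availability
# feeds an effective levelized-cost-of-energy estimate.

def steady_state_availability(lambda_f, mu_r):
    """Fraction of time in the 'up' state of a two-state Markov chain."""
    return mu_r / (lambda_f + mu_r)

lambda_f = 1.0 / (5 * 365)        # one failure every ~5 years [1/day]
mu_r = 1.0 / 14                   # two-week mean repair time [1/day]
A = steady_state_availability(lambda_f, mu_r)

p_rated_kw = 15.0                 # inverter rating (one of the studied sizes)
h_rated = 4.5                     # avg daily hours at rated power (site-dependent)
annual_kwh = A * p_rated_kw * h_rated * 365

capex, annual_opex, years = 3000.0, 60.0, 20   # illustrative cost figures
lcoe = (capex + years * annual_opex) / (years * annual_kwh)
print(f"availability = {A:.4f}, effective LCOE ~ {lcoe:.4f} $/kWh")
```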

    Self-Adaptive Artificial Bee Colony for Function Optimization

    Artificial bee colony (ABC) is a population-based optimization method that has the advantages of few control parameters, easy implementation, and strong global optimization ability. However, the ABC algorithm has a shortcoming in its position-update equation, which is good at global search but poor at local search. To balance the global and local search abilities, we first propose a self-adaptive ABC algorithm (denoted SABC) in which an improved position-update equation guides the search for new candidate individuals. In addition, a good-point-set approach is introduced to produce the initial population and the scout bees. The proposed SABC is tested on 12 well-known benchmark problems. The simulation results demonstrate that SABC has better search ability than several other ABC variants.
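
    A minimal ABC loop with a self-adaptive position update is sketched below. The update blends the standard ABC difference move with an attraction toward the best-so-far solution that grows over iterations; this illustrates the explore-to-exploit idea rather than reproducing the paper's SABC equation or its good-point-set initialization.

```python
import numpy as np

# Minimal artificial bee colony with a self-adaptive position update.

def abc_minimize(f, dim, bounds, n_food=20, limit=50, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_food, dim))     # food sources
    fit = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)
    best = X[fit.argmin()].copy()

    for t in range(iters):
        w = t / iters                          # adapts from explore to exploit
        for i in range(n_food):                # employed/onlooker step, merged
            k = rng.integers(n_food - 1)
            k += k >= i                        # random partner, k != i
            j = rng.integers(dim)              # one random dimension
            phi = rng.uniform(-1, 1)
            v = X[i].copy()
            # Self-adaptive move: standard difference term plus an
            # increasing pull toward the best-so-far solution.
            v[j] = X[i, j] + phi * (X[i, j] - X[k, j]) + w * (best[j] - X[i, j])
            v = np.clip(v, lo, hi)
            fv = f(v)
            if fv < fit[i]:
                X[i], fit[i], trials[i] = v, fv, 0   # greedy selection
            else:
                trials[i] += 1
        # Scout phase: abandon exhausted food sources.
        for i in np.where(trials > limit)[0]:
            X[i] = rng.uniform(lo, hi, dim)
            fit[i], trials[i] = f(X[i]), 0
        best = X[fit.argmin()].copy()
    return best, fit.min()

best, val = abc_minimize(lambda x: np.sum(x ** 2), dim=10, bounds=(-5.0, 5.0))
print(f"sphere minimum found: {val:.3e}")
```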

    Parameter Identification and Control Scheme for Monitoring Automatic Thickness Control System with Measurement Delay

    The thickness of the steel strip is an important indicator of overall strip quality. Deviations in thickness are primarily controlled using the automatic gauge control (AGC) system of each rolling stand. At the last stand, a monitoring AGC system is usually used, where the deviations in thickness are directly measured by an X-ray thickness gauge and used as the input to the AGC system. However, due to the physical distance between the thickness detection device and the rolling stand, a time delay is unavoidably present in the thickness control loop, which can degrade control performance and lead to system oscillations. Furthermore, the parameters of the system can change due to perturbations from external disturbances. Therefore, this paper proposes an identification and control scheme for the monitoring AGC system that can handle time delay and parameter uncertainty. The cross-correlation function is used to estimate the time delay of the system, while the system parameters are identified using a recursive least squares method. The time delay and parameter estimates are then further refined using the Levenberg-Marquardt algorithm, so as to provide the most accurate parameter estimates for the complete system. Simulation results show that, compared with a standard proportional-integral-derivative (PID) controller, the proposed approach is not affected by changes in the time delay or parameter uncertainties.
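
    The two identification stages named in the abstract, cross-correlation delay estimation followed by recursive least squares, can be sketched on a synthetic first-order loop (the Levenberg-Marquardt refinement stage is omitted); the plant model and noise levels are assumptions.

```python
import numpy as np

# Sketch: estimate the loop time delay from the input/output
# cross-correlation, then identify a first-order model with recursive
# least squares on the delay-aligned data.

rng = np.random.default_rng(4)
n, d_true, a, b = 2000, 12, 0.9, 0.5
u = rng.standard_normal(n)                         # excitation input
y = np.zeros(n)
for k in range(1, n):                              # y[k] = a*y[k-1] + b*u[k-d]
    y[k] = a * y[k - 1] + b * (u[k - d_true] if k >= d_true else 0.0)
y += 0.01 * rng.standard_normal(n)                 # measurement noise

# 1) Delay estimate: lag of the cross-correlation peak.
xc = np.correlate(y - y.mean(), u - u.mean(), mode="full")
lags = np.arange(-n + 1, n)
d_hat = int(lags[np.argmax(xc)])
print("estimated delay:", d_hat)

# 2) RLS on the delay-aligned regressors phi[k] = [y[k-1], u[k-d_hat]].
theta = np.zeros(2)
P = 1e3 * np.eye(2)
lam = 0.999                                        # forgetting factor
for k in range(max(1, d_hat), n):
    phi = np.array([y[k - 1], u[k - d_hat]])
    K = P @ phi / (lam + phi @ P @ phi)
    theta += K * (y[k] - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
print("estimated [a, b]:", np.round(theta, 3))
```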

    A KPI-Based Probabilistic Soft Sensor Development Approach that Maximizes the Coefficient of Determination

    Advanced technology for process monitoring and fault diagnosis is widely used in complex industrial processes. An important issue is the ability to monitor key performance indicators (KPIs), which often cannot be measured sufficiently quickly or accurately. This paper proposes a data-driven approach, based on maximizing the coefficient of determination, for probabilistic soft sensor development when data are missing. First, the problem of missing data in the training sample set is solved using the expectation maximization (EM) algorithm. Then, by maximizing the coefficient of determination, a probability model between the secondary variables and the KPIs is developed. Finally, a Gaussian mixture model (GMM) is used to estimate the joint probability distribution in the probabilistic soft sensor model, whose parameters are estimated using the EM algorithm. A case study on alumina concentration in the aluminum electrolysis industry demonstrates the advantages and performance of the proposed approach.
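
    The GMM part can be sketched as follows: fit a mixture to the joint distribution of the secondary variables and the KPI, then predict the KPI as its conditional expectation given the inputs. The missing-data handling and the R²-maximizing criterion from the paper are not reproduced; the data set and component count are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Minimal sketch of a GMM soft sensor: fit a Gaussian mixture to the
# joint samples (x, y), then predict y as E[y | x] under the mixture.

rng = np.random.default_rng(5)
n = 3000
x = rng.uniform(-2, 2, (n, 1))                            # secondary variable
y = np.sin(2 * x[:, 0]) + 0.1 * rng.standard_normal(n)    # nonlinear KPI
Z = np.column_stack([x, y])                               # joint samples

gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(Z)

def predict_kpi(x_new):
    """E[y | x] under the fitted mixture (x_new: shape (m, 1))."""
    x_new = np.atleast_2d(x_new)
    preds = np.zeros(len(x_new))
    for i, xi in enumerate(x_new):
        resp, cond = [], []
        for k in range(gmm.n_components):
            mu, S = gmm.means_[k], gmm.covariances_[k]
            mx, my = mu[:-1], mu[-1]
            Sxx, Sxy = S[:-1, :-1], S[:-1, -1]
            dx = xi - mx
            # Component responsibility p(k | x), up to normalization.
            w = gmm.weights_[k] * np.exp(
                -0.5 * dx @ np.linalg.solve(Sxx, dx)
            ) / np.sqrt(np.linalg.det(2 * np.pi * Sxx))
            resp.append(w)
            # Conditional mean of y for component k.
            cond.append(my + Sxy @ np.linalg.solve(Sxx, dx))
        resp = np.array(resp) / np.sum(resp)
        preds[i] = resp @ np.array(cond)
    return preds

x_test = np.array([[-1.0], [0.0], [1.0]])
print("predicted KPI:", np.round(predict_kpi(x_test), 3))
print("true KPI     :", np.round(np.sin(2 * x_test[:, 0]), 3))
```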